Step-size Estimation for Unconstrained Optimization Methods
Authors
Abstract
Some computable schemes for descent methods without line search are proposed, and their convergence properties are established. Numerical experiments on large-scale unconstrained minimization problems are reported.
Mathematical subject classification: 90C30, 65K05, 49M37.
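The abstract does not reproduce the proposed schemes here, but the general idea of estimating a step size instead of performing a line search can be illustrated with the classical Barzilai-Borwein rule. Below is a minimal sketch; the function name bb_descent, its default parameters, and the quadratic test problem are illustrative assumptions, not the paper's actual schemes or experiments.

```python
import numpy as np

def bb_descent(grad, x0, n_iter=100, alpha0=1e-3):
    """Gradient descent with Barzilai-Borwein step sizes (no line search)."""
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s = x_new - x            # difference of iterates
        y = g_new - g            # difference of gradients
        sy = s @ y
        # BB1 step: alpha = s^T s / s^T y; fall back when curvature is non-positive
        alpha = (s @ s) / sy if sy > 0 else alpha0
        x, g = x_new, g_new
    return x

# Usage: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_min = bb_descent(lambda x: A @ x - b, np.zeros(3))  # approaches A^{-1} b
```

The BB step costs only a couple of inner products per iteration, which is what makes computable step-size rules of this kind attractive for large-scale problems.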
Similar Papers
A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems
In this paper, we solve unconstrained optimization problems using a steepest descent method that is free of line searches. First, we propose a double-parameter scaled quasi-Newton formula for computing an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
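For reference, the standard secant relation mentioned in the snippet requires the updated Hessian approximation to map the most recent step into the corresponding gradient change:

```latex
B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \qquad y_k = \nabla f(x_{k+1}) - \nabla f(x_k)
```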
An efficient improvement of the Newton method for solving nonconvex optimization problems
Newton's method is one of the best-known line search methods for minimizing functions. It is well known that the search direction and the step length play important roles in this class of methods for solving optimization problems. In this investigation, a new modification of Newton's method for solving unconstrained optimization problems is presented. The significant ...
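The snippet is truncated before the modification itself; for context, the classical damped Newton iteration it builds on combines the Newton search direction with a step length:

```latex
x_{k+1} = x_k + \alpha_k d_k, \qquad d_k = -\bigl[\nabla^2 f(x_k)\bigr]^{-1} \nabla f(x_k)
```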
A Parallel Block Scaled Gradient Method with Decentralized Step-size for Block Additive Unconstrained Optimization Problems of Large Distributed Systems
In this paper, we propose a modified parallel block scaled gradient method for solving block additive unconstrained optimization problems of large distributed systems. Our method makes two major modifications to the typical parallel block scaled gradient method: first, we include a pre-processing step that reduces the computational time; second, we propose a decentralized Armijo-type step-size...
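The decentralized variant itself is not reproduced in the snippet; as a point of reference, here is a minimal sketch of the classical, centralized Armijo backtracking rule that such methods adapt. The function name armijo_step and the default parameter values are illustrative assumptions.

```python
import numpy as np

def armijo_step(f, grad_fx, x, d, s=1.0, beta=0.5, sigma=1e-4, max_backtracks=50):
    """Classical Armijo rule: shrink the trial step until sufficient decrease holds."""
    fx = f(x)
    slope = grad_fx @ d              # directional derivative; negative for a descent direction
    alpha = s
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + sigma * alpha * slope:
            return alpha             # sufficient-decrease condition satisfied
        alpha *= beta                # otherwise backtrack
    return alpha
```

Presumably, in the decentralized setting each block selects its own step size with a test of this kind using only local information; the snippet is truncated before those details.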
Smooth strongly convex interpolation and exact worst-case performance of first-order methods
We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex programs. Finding the worst-case performance of a black-box first-order method is formulated as an optimization problem over a set of smooth (strongly) convex functions and initial conditions. We develop c...
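As one concrete example of the tight bounds such convex programs yield (the snippet itself is truncated before any results), the exact worst-case of gradient descent with step size 1/L on an L-smooth convex function after N iterations is known to be

```latex
f(x_N) - f(x^\ast) \;\le\; \frac{L \,\lVert x_0 - x^\ast \rVert^2}{4N + 2}
```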
A New Hybrid Conjugate Gradient Method Based on Eigenvalue Analysis for Unconstrained Optimization Problems
In this paper, two extended three-term conjugate gradient methods based on the Liu-Storey (LS) conjugate gradient method are presented for solving unconstrained optimization problems. A remarkable property of the proposed methods is that, based on an eigenvalue analysis, the search direction always satisfies the sufficient descent condition independently of the line search method. The globa...
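For reference, the sufficient descent condition mentioned in the snippet is the standard requirement

```latex
g_k^{\top} d_k \;\le\; -c\, \lVert g_k \rVert^2 \quad \text{for some } c > 0 \text{ and all } k,
```

where g_k is the gradient and d_k the search direction; it guarantees that each d_k is a genuine descent direction regardless of the line search employed.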